A constraint-based framework to model harmony for algorithmic composition
Music constraint systems provide a rule-based approach to composition. Existing systems allow users to constrain the harmony, but the constrainable harmonic information is restricted to pitches and intervals between pitches. More abstract analytical information, such as chord or scale types, their roots, scale degrees, enharmonic note representations, and whether a note is the third or fifth of a chord, is not supported. However, such information is important for modelling various music theories.
This research proposes a framework for modelling harmony at a high level of abstraction. It explicitly represents various analytical information to allow for complex theories of harmony. It is designed for efficient propagation-based constraint solvers. The framework supports the common 12-tone equal temperament, and arbitrary other equal temperaments. Users develop harmony models by applying user-defined constraints to its music representation.
Three examples demonstrate the expressive power of the framework: (1) an automatic melody harmonisation with a simple harmony model; (2) a more complex model implementing large parts of Schoenberg's tonal theory of harmony; and (3) a composition in extended tonality. Schoenberg's comprehensive theory of harmony has not been computationally modelled before, neither with constraint programming nor in any other way.
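The first example, melody harmonisation under user-defined rules, can be illustrated with a minimal sketch. This is not the framework's actual representation or API; the chord vocabulary, the two rules, and the backtracking search are illustrative assumptions (the framework itself targets propagation-based solvers).

```python
# Minimal sketch of rule-based melody harmonisation: assign one triad
# per melody note so that (1) the note is a chord tone and (2)
# consecutive chords share at least one tone. Chord vocabulary and
# rules are illustrative assumptions, not the framework's API.
from itertools import product

MAJOR, MINOR = (0, 4, 7), (0, 3, 7)  # triad interval structures

def chord_tones(root, quality):
    """Pitch-class set of a triad with the given root and quality."""
    return {(root + i) % 12 for i in quality}

def harmonise(melody, roots=range(12), qualities=(MAJOR, MINOR)):
    """Return one (root, quality) chord per melody pitch, or None."""
    def extend(assigned, remaining):
        if not remaining:
            return assigned
        note, rest = remaining[0], remaining[1:]
        for root, quality in product(roots, qualities):
            tones = chord_tones(root, quality)
            # Rule 1: the melody note must be a chord tone.
            if note % 12 not in tones:
                continue
            # Rule 2: consecutive chords share a common tone.
            if assigned and not (chord_tones(*assigned[-1]) & tones):
                continue
            result = extend(assigned + [(root, quality)], rest)
            if result:
                return result
        return None
    return extend([], list(melody))

# Harmonise a C-E-G-C melody (pitch classes 0, 4, 7, 0).
solution = harmonise([0, 4, 7, 0])
```

A real music constraint system would replace this naive backtracking with constraint propagation over the analytical representation (chord types, scale degrees, and so on) described in the abstract.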
Teaching rule-based algorithmic composition: the PWGL library Cluster Rules
This paper presents software suitable for undergraduate students to implement computer programs that compose music. The software offers a low floor (students easily get started) but also a high ceiling (complex compositional theories can be modelled). Our students are particularly interested in tonal music: such aesthetic preferences are supported, without stylistically restricting users of the software.
We use a rule-based approach (constraint programming) to allow for great flexibility. Our software Cluster Rules implements a collection of compositional rules on rhythm, harmony, melody, and counterpoint for the new music constraint system Cluster Engine by Örjan Sandred.
The software offers a low floor by observing several guidelines. The programming environment uses visual programming (Cluster Rules and Cluster Engine extend the algorithmic composition system PWGL). Further, music theory definitions follow a template, so students can learn from examples how to create their own definitions. Finally, students are offered a collection of predefined rules, which they can freely combine in their own definitions.
Music Technology students, including students without any prior computer programming experience, have successfully used the software. Students used the musical results of their computer programs to create original compositions.
The software is also interesting for postgraduate students, composers and researchers. Complex polyphonic constraint problems are supported (high ceiling). Users can freely define their own rules and combine them with predefined rules. Also, Cluster Engine's efficient search algorithm makes advanced problems solvable in practice.
Compositions created with constraint programming
This chapter surveys music constraint programming systems, and how composers have used them. The chapter motivates and explains how users of such systems describe intended musical results with constraints. This approach to algorithmic composition is similar to the way declarative and modular compositional rules have successfully been used in music theory for centuries as a device to describe composition techniques. In a systematic overview, this survey highlights the respective strengths of different approaches and systems from a composer's point of view, complementing other more technical surveys of this field. This text describes the music constraint systems PMC, Score-PMC, PWMC (and its successor Cluster Engine), Strasheela and Orchidée; most are libraries of the composition systems PWGL or OpenMusic. These systems are shown in action by discussing the composition process of specific works by Jacopo Baboni-Schilingi, Magnus Lindberg, Örjan Sandred, Torsten Anders, Johannes Kretz and Jonathan Harvey.
Glacial-cycle simulations of the Antarctic Ice Sheet with the Parallel Ice Sheet Model (PISM) – Part 1: Boundary conditions and climatic forcing
Simulations of the glacial–interglacial history of the Antarctic Ice Sheet provide insights into dynamic threshold behavior and estimates of the ice sheet's contributions to global sea-level changes for the past, present and future. However, boundary conditions are weakly constrained, in particular at the interface between the ice sheet and the bedrock. The climatic forcing covering the last glacial cycles is also uncertain, as it is based on sparse proxy data.
We use the Parallel Ice Sheet Model (PISM) to investigate the dynamic effects of different choices of input data, e.g., for modern basal heat flux or reconstructions of past changes of sea level and surface temperature. As computational resources are limited, glacial-cycle simulations are performed using a comparatively coarse model grid of 16 km and various parameterizations, e.g., for basal sliding, iceberg calving, or for past variations in precipitation and ocean temperatures. In this study we evaluate the model's transient sensitivity to corresponding parameter choices and to different boundary conditions over the last two glacial cycles and provide estimates of the involved uncertainties. We also discuss isolated and combined effects of climate and sea-level forcing. Hence, this study serves as a "cookbook" for the growing community of PISM users and paleo-ice sheet modelers in general.
For each of the different model uncertainties with regard to climatic forcing, ice and Earth dynamics, and basal processes, we select one representative model parameter that captures the relevant uncertainties, and we motivate corresponding parameter ranges that bound the observed ice volume at present. The four selected parameters are systematically varied in a parameter ensemble analysis, which is described in a companion paper.
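Systematically varying four bounded parameters amounts to building a full-factorial ensemble of run configurations. The sketch below shows that construction; the parameter names, values, and factorial design are illustrative assumptions, not the settings used in the PISM study.

```python
# Sketch of a parameter ensemble: four representative parameters,
# each varied over a bounded candidate range, combined factorially.
# Names and values are illustrative assumptions only.
from itertools import product

# representative parameter -> candidate values spanning its uncertainty
ensemble_ranges = {
    "basal_sliding_exponent": [0.5, 0.75, 1.0],
    "calving_threshold_m": [50, 75, 100],
    "precip_scaling_percent_per_K": [2, 5, 7],
    "mantle_viscosity_Pa_s": [1e19, 5e20, 1e21],
}

def build_ensemble(ranges):
    """Yield one run configuration per combination of parameter values."""
    names = list(ranges)
    for values in product(*(ranges[name] for name in names)):
        yield dict(zip(names, values))

runs = list(build_ensemble(ensemble_ranges))
# 3 values per parameter, 4 parameters -> 3**4 = 81 member runs
```

Each generated configuration would then drive one transient model run, with ensemble members compared against the present-day ice volume constraint mentioned above.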
Modeling of tensile index using uncertain data sets
The objective of this investigation is to analyze and model tensile index. Two approaches are used: one based on training and validation data, while the other, novel approach tests models using all possible combinations of data points. This approach is focused on small data sets, here obtained from nineteen pulp samples at different refining conditions in a full-scale TMP production line with a CD-76 refiner as a primary stage. For each pulp sample, tensile index measurements were performed on twenty handsheet strips. Initially, specific energy and the external variables (dilution water feed rates and plate gaps) are used as predictors in a modeling approach based on adjusted R². Thereafter, the resulting models are compared with models combining specific energy and internal variables (primarily consistencies) obtained from temperature measurements inside the refining zones using a soft-sensor concept. It is found that specific energy and internal variables as predictors outperform the external variables when estimating tensile index.
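Ranking predictor sets by adjusted R² can be sketched as follows. The data values here are synthetic placeholders, not the paper's pulp measurements, and the comparison is reduced to single-predictor models for brevity; only the adjusted-R² formula itself is standard.

```python
# Sketch of adjusted-R^2 model comparison: fit simple linear models
# of tensile index on each candidate predictor and rank them.
# Data are synthetic placeholders, not the paper's measurements.

def adjusted_r2(y, y_hat, n_predictors):
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    n = len(y)
    y_mean = sum(y) / n
    ss_res = sum((a - b) ** 2 for a, b in zip(y, y_hat))
    ss_tot = sum((a - y_mean) ** 2 for a in y)
    r2 = 1 - ss_res / ss_tot
    return 1 - (1 - r2) * (n - 1) / (n - n_predictors - 1)

def fit_simple(x, y):
    """Ordinary least squares for y = a + b*x; returns fitted values."""
    n = len(x)
    x_mean, y_mean = sum(x) / n, sum(y) / n
    b = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
         / sum((xi - x_mean) ** 2 for xi in x))
    a = y_mean - b * x_mean
    return [a + b * xi for xi in x]

# synthetic example: tensile index vs. two candidate predictors
tensile = [30.1, 32.4, 35.0, 37.2, 39.8, 41.5]
specific_energy = [0.9, 1.1, 1.3, 1.5, 1.7, 1.9]  # strongly related
plate_gap = [0.50, 0.48, 0.55, 0.47, 0.52, 0.49]  # weakly related

scores = {
    name: adjusted_r2(tensile, fit_simple(x, tensile), 1)
    for name, x in [("specific_energy", specific_energy),
                    ("plate_gap", plate_gap)]
}
best = max(scores, key=scores.get)
```

The adjustment penalises additional predictors, so with several candidate variable sets (as in the abstract) the comparison does not automatically favour larger models.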
On the modeling of pulp properties in CTMP processes
The goal of this paper is to model the pulp properties fiber length, shives width and freeness. This is done utilizing specific energy, flat zone inlet consistency and the internal variables, consistencies and fiber residence times, estimated from refining zone soft sensors. The models are designed using more than 3600 hours of data from an RGP82CD refiner. The pulp properties are sampled using a measurement device positioned after the latency chest. Such measurements are noisy and irregularly sampled, which poses a number of challenges for the modeling procedures. In this paper it is shown that the models for shives width and fiber length are capable of predicting most of the major dynamics. However, for freeness no reliable linear models can be derived. When estimating fiber length, the specific energy together with flat zone inlet consistency, fiber residence times and the consistency in the conical zone were the dominant inputs. For shives width it was found that a similar set of inputs resulted in the best models, except that the consistencies during normal process conditions did not significantly influence shives width. Furthermore, fiber residence times were shown to have a considerably more pronounced impact on fiber length than on shives width estimates.
Norms, enforcement, and tax evasion
This paper studies individual and social motives in tax evasion. We build a simple dynamic model that incorporates these motives and their interaction. The social motives underpin the role of norms and are the source of the dynamics that we study. Our empirical analysis exploits the adoption in 1990 of a poll tax to fund local government in the UK, which led to widespread evasion. The evidence is consistent with the model's main predictions on the dynamics of evasion.